Unauthorized AI cannot recognize me: Reversible adversarial example

Authors

Abstract

In this study, we propose a new methodology to control how a user's data is recognized and used by AI by exploiting the properties of adversarial examples. For this purpose, we propose the reversible adversarial example (RAE), a new type of adversarial example. A remarkable feature of RAE is that the image can be correctly recognized by the model specified by the user, because the authorized model can recover the original image from the RAE exactly by eliminating the adversarial perturbation. On the other hand, unauthorized models cannot recognize it correctly, since for them it functions as an adversarial example. Moreover, RAE can be considered one kind of encryption for computer vision, since reversibility guarantees decryption. To realize RAE, we combine three technologies: the adversarial example, reversible data hiding for exact recovery of the perturbation, and encryption for selective use by authorized AIs who can remove the perturbation. Experimental results show that the proposed method achieves attack ability comparable to the corresponding adversarial attack and visual quality similar to the original image, for both white-box attacks and black-box attacks.
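The workflow the abstract outlines can be illustrated with a toy, self-contained sketch: craft an adversarial image, keep the exact (integer) perturbation, and gate recovery behind a key. Here a simple XOR keystream stands in for the paper's reversible-data-hiding and encryption stages, and the FGSM-style signed perturbation is only an illustrative stand-in for the actual attack; all names and parameters below are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(8, 8), dtype=np.int16)      # "original" 8-bit image
grad_sign = rng.choice([-1, 1], size=x.shape).astype(np.int16)
eps = 8                                                     # integer perturbation step

# Step 1: craft an adversarial image (FGSM-style signed step, clipped to 8-bit range).
x_adv = np.clip(x + eps * grad_sign, 0, 255)

# Step 2: the exact perturbation is what reversible data hiding would embed in the
# image itself; here we keep it explicitly and "encrypt" it with an XOR keystream
# that only authorized AIs hold.
perturbation = (x_adv - x).astype(np.int16)                 # exact, integer-valued
key = rng.integers(0, 256, size=perturbation.nbytes, dtype=np.uint8)
payload = np.frombuffer(perturbation.tobytes(), dtype=np.uint8) ^ key

# Step 3: an authorized party decrypts the perturbation and removes it,
# recovering the original image bit-exactly; anyone without `key` is left
# with the adversarial image x_adv.
decrypted = np.frombuffer((payload ^ key).tobytes(), dtype=np.int16).reshape(x.shape)
recovered = x_adv - decrypted

assert np.array_equal(recovered, x)                         # bit-exact recovery
```

Integer pixel arithmetic is used deliberately: exact reversibility is trivial over integers, whereas floating-point subtraction would not guarantee bit-exact recovery.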


Similar articles

Adversarial AI

In recent years AI research has had an increasing role in models and algorithms for security problems. Game theoretic models of security, and Stackelberg security games in particular, have received special attention, in part because these models and associated tools have seen actual deployment in homeland security and sustainability applications. Stackelberg security games have two prototypical...


Beyond Adversarial: The Case for Game AI as Storytelling

As a field, artificial intelligence (AI) has been applied to games for more than 50 years, beginning with traditional two-player adversarial games like tic-tac-toe and chess and extending to modern strategy games, first-person shooters, and social simulations. AI practitioners have become adept at designing algorithms that enable computers to play games at or beyond human levels in many cases. I...


ASP: A Fast Adversarial Attack Example Generation Framework based on Adversarial Saliency Prediction

With their excellent accuracy and feasibility, Neural Networks (NNs) have been widely applied in novel intelligent applications and systems. However, with the appearance of the Adversarial Attack, NN-based system performance becomes extremely vulnerable: image classification results can be arbitrarily misled by adversarial examples, which are crafted images with human-unperc...


Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks

Machine learning systems based on deep neural networks, being able to produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications. However, they are shown to be vulnerable to adversarial example attacks, which generate malicious output by adding slight perturbations to the input. Previous adversarial example crafting methods, however, use s...


Unauthorized Version

A 'mole' at the Bristol Royal Infirmary tells me that at the last Hospital Committee Meeting it was explained to the assembled consultants that under the rules of the new Special Trust, when they have fulfilled the Provider contract for the number of patients to be seen in a particular week, they will not be allowed to see any more.



Journal

Journal title: Pattern Recognition

Year: 2023

ISSN: 1873-5142, 0031-3203

DOI: https://doi.org/10.1016/j.patcog.2022.109048